Conference Proceedings

The vulnerability of learning to adversarial perturbation increases with intrinsic dimensionality

L Amsaleg, J Bailey, D Barbe, S Erfani, ME Houle, V Nguyen, M Radovanovic

NII Technical Reports | IEEE Xplore | Published: 2018

Abstract

© 2017 IEEE. Recent research has shown that machine learning systems, including state-of-the-art deep neural networks, are vulnerable to adversarial attacks. By adding an imperceptible amount of adversarial noise to an input object, an attacker can, with high probability, trick the classifier into assigning the modified object to any desired class. It has also been observed that these adversarial samples generalize well across models. A complete understanding of the nature of adversarial samples has not yet emerged. Towards this goal, we present a novel theoretical result formally linking the adversarial vulnerability of learning to the intrinsic dimensionality of the data. In particular, our inv..
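To make the abstract's central quantity concrete: local intrinsic dimensionality (LID) is commonly estimated from a query point's distances to its k nearest neighbours via a maximum-likelihood (Hill-type) estimator. The sketch below is an illustrative implementation of that standard estimator from the LID literature, not code from this paper; the function name and test data are assumptions for the example.

```python
import numpy as np

def lid_mle(knn_dists):
    """MLE (Hill-type) estimate of local intrinsic dimensionality
    from a query point's k-nearest-neighbour distances:
        LID_hat = -1 / mean_i( log(r_i / r_k) ),
    where r_k is the largest of the k neighbour distances."""
    r = np.sort(np.asarray(knn_dists, dtype=float))
    return -1.0 / np.mean(np.log(r / r[-1]))

# Query on a 1-D manifold: 100 neighbours evenly spaced along a line.
line_dists = np.arange(1, 101)
print(round(lid_mle(line_dists), 2))  # ≈ 1.03: data is intrinsically 1-D

# Query inside a 2-D uniform cloud: estimate should sit near 2.
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(5000, 2))
knn = np.sort(np.linalg.norm(pts, axis=1))[:100]  # 100 NN of the origin
print(lid_mle(knn))  # close to 2, up to sampling noise
```

Intuitively, the paper's result says that as this estimated dimensionality grows, smaller perturbations suffice to move a point across a decision boundary, which is why high-LID data is harder to defend.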



Grants

Awarded by University of Melbourne


Funding Acknowledgements

Laurent Amsaleg is in part supported by the European CHIST-ERA ID_IOT project. James Bailey, Sarah Erfani and Vinh Nguyen are in part supported by the Australian Research Council via grant number DP140101969. Vinh Nguyen is in part supported by a University of Melbourne ECR grant. Michael E. Houle is in part supported by JSPS Kakenhi Kiban (A) Research Grant 25240036 and Kiban (B) Research Grant 15H02753. Milos Radovanovic is in part supported by the Serbian Ministry of Education, Science and Technological Development through project number OI174023.